Autonomous agents require self-localization to navigate in unknown environments. They can use visual odometry (VO) to estimate their ego-motion and localize themselves with visual sensors. Unlike inertial sensors, which suffer from drift, or wheel encoders, which suffer from slippage, this motion-estimation strategy is not compromised by such errors. However, VO with conventional cameras is computationally demanding, which limits its application in systems with strict low-latency, low-memory, and low-energy requirements. Event-based cameras and neuromorphic computing hardware offer a promising low-power solution to the VO problem. However, conventional VO algorithms do not translate readily to neuromorphic hardware. In this work, we present a VO algorithm built entirely from neuronal building blocks suitable for neuromorphic implementation. The building blocks are groups of neurons that represent vectors in the computational framework of Vector Symbolic Architectures (VSA), which has been proposed as an abstraction layer for programming neuromorphic hardware. The proposed VO network generates and stores a working memory of the visual environment it is exposed to, and updates this working memory while simultaneously estimating the changing position and orientation of the camera. We demonstrate how VSA can be leveraged as a computing paradigm for neuromorphic robotics. Moreover, our results represent an important step toward using neuromorphic computing hardware for fast and power-efficient VO, as well as for the related task of simultaneous localization and mapping (SLAM). We validate this approach experimentally on a robotic task and on an event-based dataset, demonstrating state-of-the-art performance.
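To make the binding-based mechanism concrete, here is a minimal NumPy sketch (not the authors' implementation) of the underlying VSA idea: continuous 2D locations are encoded as complex phasor vectors via fractional power encoding, the scene is stored as a bundled sum of landmark encodings, and a camera shift appears as a single binding that can be recovered by comparison against candidate displacements. The dimensionality, landmark positions, and the brute-force search are illustrative assumptions; the paper uses resonator networks on neuromorphic hardware instead.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 2048  # dimensionality of the hyperdimensional vectors

# Fractional power encoding: a random unit-modulus "base" phasor vector raised
# element-wise to the power x encodes the continuous coordinate x.
base_x = np.exp(1j * rng.uniform(-np.pi, np.pi, D))
base_y = np.exp(1j * rng.uniform(-np.pi, np.pi, D))

def encode(x, y):
    """Bind the two coordinate encodings into a single location vector."""
    return (base_x ** x) * (base_y ** y)

# "Working memory" of the visual scene: bundle (sum) of landmark encodings.
landmarks = [(1.0, 2.0), (3.5, -1.0), (0.5, 0.5)]
scene = sum(encode(x, y) for x, y in landmarks)

# A camera translation by (dx, dy) shifts every landmark by the same amount,
# which in VSA terms is a single binding with encode(dx, dy).
dx, dy = 0.7, -0.3
shifted_scene = scene * encode(dx, dy)

def similarity(a, b):
    return np.real(np.vdot(a, b)) / (np.linalg.norm(a) * np.linalg.norm(b))

# Estimate the shift by comparing the new scene against shifted versions of the
# stored one (a brute-force stand-in for the resonator-network search).
candidates = [(i / 10, j / 10) for i in range(-10, 11) for j in range(-10, 11)]
best = max(candidates, key=lambda d: similarity(shifted_scene, scene * encode(*d)))
print("estimated camera shift:", best)  # close to (0.7, -0.3)
```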
Inferring the position of objects and their rigid transformations remains an open problem in visual scene understanding. Here we propose a neuromorphic solution that uses an efficient factorization network and rests on three key concepts: (1) a computational framework based on Vector Symbolic Architectures (VSA) with complex-valued vectors; (2) the design of hierarchical resonator networks (HRN) to handle the non-commutative nature of translation and rotation in visual scenes when the two are combined; and (3) the design of a multi-compartment spiking phasor neuron model for implementing complex-valued vector binding on neuromorphic hardware. The VSA framework uses vector binding operations to build a generative image model in which binding acts as an equivariant operation for geometric transformations. A scene can therefore be described as a sum of vector products, which a resonator network can efficiently factorize to infer objects and their poses. The HRN enables a partitioned architecture in which vector binding is equivariant to horizontal and vertical translation within one partition, and to rotation and scaling within the other. The spiking neuron model allows the resonator network to be mapped onto efficient, low-power neuromorphic hardware. In this work, we demonstrate our approach on synthetic scenes composed of simple 2D shapes undergoing rigid geometric transformations and color changes. A companion paper demonstrates the approach in real-world application scenarios for machine vision and robotics.
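The following NumPy sketch illustrates the binding-and-factorization principle the abstract builds on, under simplified assumptions: a composite vector is the element-wise product of one entry from each of two complex phasor codebooks, and a resonator-style iteration alternately unbinds and cleans up each factor to recover the contributing indices. The codebook sizes and the two-factor setup are illustrative; the hierarchical, partitioned network and the spiking implementation are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
D, K = 1024, 8  # vector dimension, entries per codebook

# Two codebooks of random complex phasor vectors, e.g. one for object identity
# and one for a transformation parameter (illustrative stand-ins).
A = np.exp(1j * rng.uniform(-np.pi, np.pi, (K, D)))
B = np.exp(1j * rng.uniform(-np.pi, np.pi, (K, D)))

# A "scene" vector: binding (element-wise product) of one entry from each codebook.
s = A[2] * B[5]

def cleanup(v, codebook):
    """Project v onto the codebook and keep only the phases (unit phasors)."""
    coeffs = codebook.conj() @ v          # similarity to each codebook entry
    out = coeffs @ codebook               # weighted superposition of entries
    return out / np.abs(out)

# Resonator iteration: alternately unbind the current estimate of one factor
# from the scene and clean up the result against the other codebook.
a_hat = np.exp(1j * rng.uniform(-np.pi, np.pi, D))
b_hat = np.exp(1j * rng.uniform(-np.pi, np.pi, D))
for _ in range(30):
    a_hat = cleanup(s * np.conj(b_hat), A)
    b_hat = cleanup(s * np.conj(a_hat), B)

print("decoded indices:", np.argmax(np.abs(A.conj() @ a_hat)),
      np.argmax(np.abs(B.conj() @ b_hat)))   # expected: 2 and 5
```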
Biologically inspired spiking neurons used in neuromorphic computing are nonlinear filters with dynamic state variables, very different from the stateless neuron models used in deep learning. The next version of Intel's neuromorphic research processor, Loihi 2, supports a wide range of stateful spiking neuron models with fully programmable dynamics. Here we showcase advanced spiking neuron models that can be used to efficiently process streaming data in simulation experiments on emulated Loihi 2 hardware. In one example, resonate-and-fire (RF) neurons are used to compute the short-time Fourier transform (STFT) with similar computational complexity but 47 times less output bandwidth than the conventional STFT. In another example, we describe an optical-flow estimation algorithm using spatiotemporal RF neurons that requires over 90 times fewer operations than a conventional DNN-based solution. We also show promising preliminary results on training RF neurons with backpropagation for an audio classification task. Finally, we show that a cascade of Hopf resonators, a variant of the RF neuron, replicates novel properties of the cochlea and motivates an efficient spike-based spectrogram encoder.
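As a rough illustration of the resonate-and-fire dynamics mentioned above, the NumPy sketch below runs a bank of damped complex oscillators, one per analysis frequency, over a streaming signal and reads out their magnitudes periodically, yielding an STFT-like representation. Spike generation, the quoted 47x bandwidth reduction, and any Loihi 2 specifics are not modeled; the sample rate, damping factor, and frequency grid are assumed values.

```python
import numpy as np

fs = 16000                               # sample rate (Hz), assumed for the example
freqs = np.arange(100, 2100, 100)        # one RF neuron per analysis frequency
decay = 0.999                            # membrane damping per time step

# Complex membrane state of each neuron; each step rotates the state by the
# neuron's eigenfrequency, damps it, and adds the current input sample.
z = np.zeros(len(freqs), dtype=complex)
rot = decay * np.exp(2j * np.pi * freqs / fs)

t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 500 * t)     # a 500 Hz test tone

spectrogram = []
for n, x in enumerate(signal):
    z = rot * z + x                      # resonate-and-fire subthreshold dynamics
    if n % 256 == 0:                     # read the magnitudes out periodically
        spectrogram.append(np.abs(z))

spectrogram = np.array(spectrogram)
print("most responsive neuron:", freqs[np.argmax(spectrogram[-1])], "Hz")  # 500
```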
The deployment of machine learning algorithms on resource-constrained edge devices is an important challenge from both theoretical and applied points of view. In this paper, we focus on resource-efficient randomly connected neural networks known as random vector functional link (RVFL) networks, since their simple design and very fast training times make them attractive for solving many applied classification tasks. We propose to represent the input features with a density-based encoding known from the area of stochastic computing, and to use the binding and bundling operations from hyperdimensional computing to obtain the activations of the hidden neurons. Using a collection of 121 real-world datasets from the UCI Machine Learning Repository, we show empirically that the proposed approach achieves higher average accuracy than the conventional RVFL. We also show that the readout matrix can be represented with integers from a limited range with only a minimal loss of accuracy. In this case, the proposed approach operates only on small n-bit integers, which results in a computationally efficient architecture. Finally, with a field-programmable gate array (FPGA) hardware implementation, we show that this approach consumes approximately 11 times less energy than the conventional RVFL.
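A minimal NumPy sketch of the hidden-layer construction described above, under simplifying assumptions: each feature gets a thermometer-style density code, is bound (element-wise multiplied) with a random bipolar key vector, and the bound vectors are bundled by summation and clipping; a ridge-regression readout is the only trained part, as in an RVFL network. The dimensionality, quantization, clipping range, and the synthetic two-class data are illustrative, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 512          # dimensionality of hidden layer / HD vectors
Q = 16           # quantization levels for the density-based encoding

def density_encode(x):
    """Thermometer-style density code: the number of +1s grows with x in [0, 1]."""
    level = int(np.clip(np.round(x * Q), 0, Q))
    code = -np.ones(N)
    code[: level * (N // Q)] = 1.0
    return code

def hidden_activations(features, keys):
    """Bind each encoded feature to its random key and bundle by summation."""
    bound = [density_encode(x) * k for x, k in zip(features, keys)]
    return np.clip(np.sum(bound, axis=0), -3, 3)   # clipping as a cheap nonlinearity

# Tiny synthetic 2-class problem with 4 features in [0, 1].
X = rng.uniform(0, 1, (200, 4))
y = (X[:, 0] + X[:, 2] > 1.0).astype(float)
keys = rng.choice([-1.0, 1.0], size=(4, N))        # random bipolar key per feature

H = np.array([hidden_activations(x, keys) for x in X])
# Ridge-regression readout (the only trained part of an RVFL-style network).
W = np.linalg.solve(H.T @ H + 1.0 * np.eye(N), H.T @ y)
print("training accuracy:", np.mean((H @ W > 0.5) == (y > 0.5)))
```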
Logic Mill is a scalable and openly accessible software system that identifies semantically similar documents within either one domain-specific corpus or multi-domain corpora. It uses advanced Natural Language Processing (NLP) techniques to generate numerical representations of documents. Currently it leverages a large pre-trained language model to generate these document representations. The system focuses on scientific publications and patent documents and contains more than 200 million documents. It is easily accessible via a simple Application Programming Interface (API) or via a web interface. Moreover, it is continuously being updated and can be extended to text corpora from other domains. We see this system as a general-purpose tool for future research applications in the social sciences and other domains.
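The abstract describes embedding documents with a large pre-trained language model and retrieving semantically similar ones. The sketch below shows that general idea with the public sentence-transformers library and cosine similarity; it is a generic illustration, not the Logic Mill API, and the model name and example texts are arbitrary choices.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")   # small off-the-shelf encoder

corpus = [
    "A neuromorphic visual odometry algorithm using vector symbolic architectures.",
    "A method for quantifying myocardial infarction in cardiac MRI scans.",
    "Spectrum sharing between 5G systems and incumbent radar users.",
]
query = "Event-based cameras for robot self-localization."

# Encode the corpus and the query into dense vectors, then rank by cosine similarity.
corpus_emb = model.encode(corpus, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)
scores = util.cos_sim(query_emb, corpus_emb)[0]

best = int(scores.argmax())
print(f"most similar document: {corpus[best]!r} (score {float(scores[best]):.2f})")
```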
This paper proposes a novel observer-based controller for a Vertical Take-Off and Landing (VTOL) Unmanned Aerial Vehicle (UAV), designed to directly receive measurements from a Vision-Aided Inertial Navigation System (VA-INS) and produce the required thrust and rotational torque inputs. The VA-INS is composed of a vision unit (monocular or stereo camera) and a typical low-cost 6-axis Inertial Measurement Unit (IMU) equipped with an accelerometer and a gyroscope. A major benefit of this approach is its applicability in environments where the Global Positioning System (GPS) is inaccessible. The proposed VTOL-UAV observer utilizes IMU and feature measurements to accurately estimate attitude (orientation), gyroscope bias, position, and linear velocity. The ability to use VA-INS measurements directly makes the proposed observer design more computationally efficient, as it obviates the need for attitude and position reconstruction. Once the motion components are estimated, the observer-based controller is used to control the VTOL-UAV attitude, angular velocity, position, and linear velocity, guiding the vehicle along the desired trajectory in six degrees of freedom (6 DoF). The closed-loop estimation and control errors of the observer-based controller are proven to be exponentially stable starting from almost any initial condition. To achieve a global and unique VTOL-UAV representation in 6 DoF, the proposed approach is posed on the Lie group, and a unit-quaternion formulation of the design is also presented. Although the proposed approach is described in continuous form, a discrete version is provided and tested. Keywords: Vision-aided inertial navigation system, unmanned aerial vehicle, vertical take-off and landing, stochastic, noise, robotics, control systems, air mobility, observer-based controller algorithm, landmark measurement, exponential stability.
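As a small, self-contained illustration of one ingredient of such an observer, the sketch below propagates a unit-quaternion attitude estimate from bias-corrected gyroscope measurements. The vision-driven correction terms, the position and velocity estimation, and the controller itself are omitted; all symbols and numbers are illustrative assumptions rather than the paper's design.

```python
import numpy as np

def quat_mult(q, p):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = p
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])

def propagate_attitude(q, gyro, bias_est, dt):
    """One prediction step of the attitude estimate from IMU rate measurements."""
    omega = gyro - bias_est                    # bias-corrected body angular rate
    angle = np.linalg.norm(omega) * dt
    if angle < 1e-12:
        return q
    axis = omega / np.linalg.norm(omega)
    dq = np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))
    q_next = quat_mult(q, dq)
    return q_next / np.linalg.norm(q_next)     # re-normalize to the unit sphere

# Example: integrate a constant 10 deg/s yaw rate for one second at 100 Hz.
q = np.array([1.0, 0.0, 0.0, 0.0])
gyro = np.array([0.0, 0.0, np.deg2rad(10.0)])
bias = np.zeros(3)
for _ in range(100):
    q = propagate_attitude(q, gyro, bias, dt=0.01)
print("attitude after 1 s of 10 deg/s yaw:", q)   # ~[cos(5 deg), 0, 0, sin(5 deg)]
```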
Recent advances in upper limb prostheses have led to significant improvements in the number of movements provided by the robotic limb. However, the method for controlling multiple degrees of freedom via user-generated signals remains challenging. To address this issue, various machine learning controllers have been developed to better predict movement intent. As these controllers become more intelligent and take on more autonomy in the system, the traditional approach of representing the human-machine interface as a human controlling a tool becomes limiting. One possible approach to improve the understanding of these interfaces is to model them as collaborative, multi-agent systems through the lens of joint action. The field of joint action has been commonly applied to two human partners who are trying to work jointly together to achieve a task, such as singing or moving a table together, by effecting coordinated change in their shared environment. In this work, we compare different prosthesis controllers (proportional electromyography with sequential switching, pattern recognition, and adaptive switching) in terms of how they present the hallmarks of joint action. The results of the comparison lead to a new perspective for understanding how existing myoelectric systems relate to each other, along with recommendations for how to improve these systems by increasing the collaborative communication between each partner.
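For readers unfamiliar with the first controller in the comparison, here is a hypothetical sketch of proportional myoelectric control with sequential switching: the difference between two EMG envelopes drives the currently active joint, and a co-contraction event cycles to the next joint. The thresholds, joint list, and class names are invented for illustration and do not reflect any specific device or the study's implementation.

```python
JOINTS = ["hand_open_close", "wrist_rotation", "elbow_flexion"]

class SequentialSwitchingController:
    """Proportional EMG control of one joint at a time, with switching triggered
    by co-contraction (both channels active together). Illustrative thresholds."""

    def __init__(self, gain=1.0, switch_threshold=0.6, activity_threshold=0.1):
        self.active_joint = 0
        self.gain = gain
        self.switch_threshold = switch_threshold
        self.activity_threshold = activity_threshold

    def step(self, emg_flexor, emg_extensor):
        # Co-contraction: both envelopes high at once -> cycle to the next joint.
        if emg_flexor > self.switch_threshold and emg_extensor > self.switch_threshold:
            self.active_joint = (self.active_joint + 1) % len(JOINTS)
            return JOINTS[self.active_joint], 0.0
        # Otherwise the envelope difference drives the active joint proportionally,
        # with a small dead-band to reject noise.
        drive = emg_flexor - emg_extensor
        if abs(drive) < self.activity_threshold:
            drive = 0.0
        return JOINTS[self.active_joint], self.gain * drive

ctrl = SequentialSwitchingController()
print(ctrl.step(0.4, 0.05))   # proportional drive of the currently active joint
print(ctrl.step(0.8, 0.8))    # co-contraction -> switch to the next joint
```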
A "heart attack" or myocardial infarction (MI), occurs when an artery supplying blood to the heart is abruptly occluded. The "gold standard" method for imaging MI is Cardiovascular Magnetic Resonance Imaging (MRI), with intravenously administered gadolinium-based contrast (late gadolinium enhancement). However, no "gold standard" fully automated method for the quantification of MI exists. In this work, we propose an end-to-end fully automatic system (MyI-Net) for the detection and quantification of MI in MRI images. This has the potential to reduce the uncertainty due to the technical variability across labs and inherent problems of the data and labels. Our system consists of four processing stages designed to maintain the flow of information across scales. First, features from raw MRI images are generated using feature extractors built on ResNet and MoblieNet architectures. This is followed by the Atrous Spatial Pyramid Pooling (ASPP) to produce spatial information at different scales to preserve more image context. High-level features from ASPP and initial low-level features are concatenated at the third stage and then passed to the fourth stage where spatial information is recovered via up-sampling to produce final image segmentation output into: i) background, ii) heart muscle, iii) blood and iv) scar areas. New models were compared with state-of-art models and manual quantification. Our models showed favorable performance in global segmentation and scar tissue detection relative to state-of-the-art work, including a four-fold better performance in matching scar pixels to contours produced by clinicians.
The increasing popularity of deep-learning-powered applications raises the issue of the vulnerability of neural networks to adversarial attacks. In other words, hardly perceptible changes in the input data lead to output errors in neural networks, hindering their utilization in applications that involve decisions with security risks. A number of previous works have already thoroughly evaluated the most commonly used configuration, Convolutional Neural Networks (CNNs), against different types of adversarial attacks. Moreover, recent works demonstrated the transferability of some adversarial examples across different neural network models. This paper studied the robustness of new emerging models, such as SpinalNet-based neural networks and Compact Convolutional Transformers (CCT), on the CIFAR-10 image classification problem. Each architecture was tested against four white-box attacks and three black-box attacks. Unlike the VGG and SpinalNet models, the attention-based CCT configuration demonstrated a large span between strong robustness and vulnerability to adversarial examples. Finally, the transferability of attacks between the VGG, VGG-inspired SpinalNet, and pretrained CCT 7/3x1 models was studied. It was shown that the high effectiveness of an attack on a particular individual model does not guarantee its transferability to other models.
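As context for the white-box attacks mentioned above, here is a short PyTorch sketch of the simplest of that family, the Fast Gradient Sign Method (FGSM), applied to an arbitrary differentiable classifier. The stand-in model, epsilon value, and data are placeholders, not the configurations evaluated in the paper.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, images, labels, epsilon=8 / 255):
    """Fast Gradient Sign Method: a single gradient-sign step that perturbs the
    input within an L-infinity ball of radius epsilon to increase the loss."""
    images = images.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(images), labels)
    loss.backward()
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()    # keep pixels in the valid range

# Usage with any CIFAR-10-shaped classifier (a tiny stand-in model shown here).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(4, 3, 32, 32)               # a fake batch of normalized images
y = torch.randint(0, 10, (4,))
x_adv = fgsm_attack(model, x, y)
print("max perturbation:", (x_adv - x).abs().max().item())   # <= 8/255
```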
Spectrum coexistence is essential for next generation (NextG) systems to share the spectrum with incumbent (primary) users and meet the growing demand for bandwidth. One example is the 3.5 GHz Citizens Broadband Radio Service (CBRS) band, where the 5G and beyond communication systems need to sense the spectrum and then access the channel in an opportunistic manner when the incumbent user (e.g., radar) is not transmitting. To that end, a high-fidelity classifier based on a deep neural network is needed for low misdetection (to protect incumbent users) and low false alarm (to achieve high throughput for NextG). In a dynamic wireless environment, the classifier can only be used for a limited period of time, i.e., coherence time. A portion of this period is used for learning to collect sensing results and train a classifier, and the rest is used for transmissions. In spectrum sharing systems, there is a well-known tradeoff between the sensing time and the transmission time. While increasing the sensing time can increase the spectrum sensing accuracy, there is less time left for data transmissions. In this paper, we present a generative adversarial network (GAN) approach to generate synthetic sensing results to augment the training data for the deep learning classifier so that the sensing time can be reduced (and thus the transmission time can be increased) while keeping high accuracy of the classifier. We consider both additive white Gaussian noise (AWGN) and Rayleigh channels, and show that this GAN-based approach can significantly improve both the protection of the high-priority user and the throughput of the NextG user (more in Rayleigh channels than AWGN channels).
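The sketch below illustrates the GAN-based augmentation idea in PyTorch under assumed dimensions: a generator learns to produce synthetic sensing feature vectors that a discriminator cannot distinguish from real ones, and the trained generator then supplies extra samples for the spectrum-sensing classifier. The feature length, architectures, and random stand-in data are illustrative, not the paper's setup.

```python
import torch
import torch.nn as nn

FEAT_DIM, NOISE_DIM = 64, 16    # length of a sensing-result feature vector, latent size

G = nn.Sequential(nn.Linear(NOISE_DIM, 128), nn.ReLU(), nn.Linear(128, FEAT_DIM))
D = nn.Sequential(nn.Linear(FEAT_DIM, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    """One adversarial update: D learns to separate real from fake sensing
    vectors, then G learns to fool D."""
    b = real_batch.size(0)
    fake = G(torch.randn(b, NOISE_DIM))

    # Discriminator update.
    opt_d.zero_grad()
    d_loss = bce(D(real_batch), torch.ones(b, 1)) + bce(D(fake.detach()), torch.zeros(b, 1))
    d_loss.backward()
    opt_d.step()

    # Generator update.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(b, 1))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

# Stand-in for real spectrum-sensing results collected during the sensing period.
real = torch.randn(256, FEAT_DIM)
for epoch in range(5):
    train_step(real)

# Synthetic samples used to augment the classifier's training data.
synthetic = G(torch.randn(1000, NOISE_DIM)).detach()
print(synthetic.shape)   # torch.Size([1000, 64])
```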